Vapnik–Chervonenkis theory

Vapnik–Chervonenkis theory (also known as VC theory) was developed during 1960–1990 by Vladimir Vapnik and Alexey Chervonenkis. The theory is a form of computational learning theory, which attempts to explain the learning process from a statistical point of view.

VC theory is related to statistical learning theory and to empirical processes. Richard M. Dudley and Vladimir Vapnik, among others, have applied VC theory to empirical processes.

VC theory covers at least four parts (as explained in The Nature of Statistical Learning Theory[1]):

- the theory of consistency of learning processes: what are the (necessary and sufficient) conditions for consistency of a learning process based on the empirical risk minimization principle?
- the nonasymptotic theory of the rate of convergence of learning processes: how fast is the rate of convergence of the learning process?
- the theory of controlling the generalization ability of learning processes: how can one control the rate of convergence (the generalization ability) of the learning process?
- the theory of constructing learning machines: how can one construct algorithms that can control the generalization ability?

In addition, VC theory and the VC dimension are instrumental in the theory of empirical processes, in the case of processes indexed by VC classes.
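Concretely, a standard result in this direction (one classical form of the VC inequality; the constants below are textbook ones and are stated here as an assumption, since the article itself gives no formula) bounds the uniform deviation of empirical means over a VC class:

```latex
% Classical VC inequality for a class F of {0,1}-valued functions with
% growth function S_F(n); X_1, ..., X_n i.i.d.,
% P_n f = (1/n) \sum_i f(X_i), and P f = E f(X).
\[
  \Pr\Big( \sup_{f \in \mathcal{F}} \big| P_n f - P f \big| > \varepsilon \Big)
  \le 8\, S_{\mathcal{F}}(n)\, e^{-n \varepsilon^{2}/32}.
\]
% By the Sauer–Shelah lemma, a finite VC dimension d gives
% S_F(n) <= (en/d)^d for n >= d, so the right-hand side tends to 0:
% F is then a uniform Glivenko–Cantelli class.
```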

The last part of VC theory introduced a well-known learning algorithm: the support vector machine.
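As an illustration of that last part, the sketch below trains a soft-margin support vector machine on an invented toy dataset; the use of scikit-learn's SVC and the particular data are assumptions made for the example, not part of the article:

```python
import numpy as np
from sklearn.svm import SVC

# Toy, linearly separable 2-D data (invented for illustration).
rng = np.random.default_rng(0)
X_pos = rng.normal(loc=[2.0, 2.0], scale=0.5, size=(20, 2))
X_neg = rng.normal(loc=[-2.0, -2.0], scale=0.5, size=(20, 2))
X = np.vstack([X_pos, X_neg])
y = np.array([1] * 20 + [-1] * 20)

# Soft-margin linear SVM; C trades empirical fit against margin size.
clf = SVC(kernel="linear", C=1.0)
clf.fit(X, y)

print("number of support vectors:", clf.support_vectors_.shape[0])
print("prediction at the origin:", clf.predict([[0.0, 0.0]]))
```

The trade-off controlled by C, fitting the data well versus keeping a large margin, is one practical instance of the structural risk minimization idea.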

VC theory contains important concepts such as the VC dimension and structural risk minimization. This theory is related to mathematical subjects such as empirical processes and growth functions.
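To make the VC dimension concrete, the following sketch brute-forces a shattering check for the class of closed intervals on the real line; the helper names and the finite endpoint grid are assumptions, chosen so the grid realizes every labeling a real interval could produce on the sample points:

```python
from itertools import combinations

def shattered(points, hypotheses):
    """True iff `hypotheses` realizes all 2^n labelings of `points`."""
    labelings = {tuple(h(x) for x in points) for h in hypotheses}
    return len(labelings) == 2 ** len(points)

def interval_hypotheses(grid):
    """Classifiers x -> 1[a <= x <= b] for endpoints a <= b on a finite grid."""
    return [lambda x, a=a, b=b: int(a <= x <= b)
            for a in grid for b in grid if a <= b]

points = [1.0, 2.0, 3.0]
hyps = interval_hypotheses([0.5, 1.5, 2.5, 3.5])

# Every 2-point subset is shattered ...
print(all(shattered(list(pair), hyps) for pair in combinations(points, 2)))  # True
# ... but no single interval produces the labeling (1, 0, 1) on three points,
print(shattered(points, hyps))  # False
# so the VC dimension of intervals on the line is 2.
```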

References

[1] Vapnik, Vladimir N. (2000). The Nature of Statistical Learning Theory (2nd ed.). New York: Springer.